
    Indoor Positioning Techniques Based on Wireless LAN

    As well as delivering high-speed internet access, Wireless LAN (WLAN) can be used as an effective indoor positioning system, competitive in both accuracy and cost with similar systems. To date, several signal-strength-based techniques have been proposed. Researchers at the University of New South Wales (UNSW) have developed several innovative implementations of WLAN positioning systems. This paper describes the techniques used and details the experimental results of the research.
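    The abstract does not spell out the positioning techniques; the most common signal-strength approach is fingerprinting with a k-nearest-neighbour match against a surveyed radio map. The sketch below is a minimal, generic illustration of that idea, not the UNSW implementations; all data, access-point counts, and parameter values are hypothetical.

```python
# Minimal sketch of WLAN signal-strength fingerprinting with k-nearest neighbours.
# Generic illustration only; data and parameters are hypothetical, not the UNSW systems.
import numpy as np

def knn_locate(rss_query, fingerprint_rss, fingerprint_xy, k=3):
    """Estimate a position by averaging the k reference points whose stored
    RSS vectors are closest (Euclidean distance) to the observed one."""
    dists = np.linalg.norm(fingerprint_rss - rss_query, axis=1)
    nearest = np.argsort(dists)[:k]
    return fingerprint_xy[nearest].mean(axis=0)

# Hypothetical offline survey: RSS (dBm) from 3 access points at 4 reference points.
fingerprint_rss = np.array([[-40, -70, -80],
                            [-55, -60, -75],
                            [-70, -50, -65],
                            [-80, -45, -55]], dtype=float)
fingerprint_xy = np.array([[0, 0], [5, 0], [10, 0], [15, 0]], dtype=float)

print(knn_locate(np.array([-58.0, -58.0, -72.0]), fingerprint_rss, fingerprint_xy))
```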

    How Local is the Local Diversity? Reinforcing Sequential Determinantal Point Processes with Dynamic Ground Sets for Supervised Video Summarization

    The large volume of video content and high viewing frequency demand automatic video summarization algorithms, a key property of which is the capability of modeling diversity. If videos are lengthy, like hours-long egocentric videos, it is necessary to track the temporal structure of the video and enforce local diversity. Local diversity means that shots selected within a short time span are diverse, while visually similar shots are still allowed to co-exist in the summary if they appear far apart in the video. In this paper, we propose a novel probabilistic model, built upon SeqDPP, to dynamically control the time span of a video segment upon which the local diversity is imposed. In particular, we enable SeqDPP to learn to automatically infer, from the input video, how local the local diversity should be. The resulting model is very difficult to train with standard maximum likelihood estimation (MLE), which further suffers from exposure bias and non-differentiable evaluation metrics. To tackle these problems, we instead devise a reinforcement learning algorithm for training the proposed model. Extensive experiments verify the advantages of our model and the new learning algorithm over MLE-based methods.
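    As a toy illustration of the diversity-modeling idea behind DPP-based summarization (not the paper's SeqDPP with dynamic ground sets, nor its reinforcement-learning training), the sketch below greedily selects a small subset of shots that maximizes the log-determinant of a similarity-kernel submatrix, so that redundant shots are penalized. The features, kernel, and greedy selection are illustrative assumptions.

```python
# Toy sketch of diversity selection with a determinantal point process (DPP) kernel:
# greedily add the shot that most increases the log-determinant of the selected submatrix.
import numpy as np

def greedy_dpp(L, k):
    """Greedy MAP-style selection of k items under DPP kernel L."""
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(L)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_gain:
                best, best_gain = i, logdet
        selected.append(best)
    return selected

# Hypothetical per-shot features; similar shots get correlated kernel entries.
feats = np.random.default_rng(0).normal(size=(8, 4))
L = np.exp(-0.5 * np.square(feats[:, None] - feats[None]).sum(-1))
print(greedy_dpp(L, 3))
```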

    Novel application and validation of in vivo micro‐CT to study bone modelling in 3D

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/149362/1/ocr12265.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/149362/2/ocr12265_am.pd

    Cytotoxicity of the Roots of Trillium govanianum Against Breast (MCF7), Liver (HepG2), Lung (A549) and Urinary Bladder (EJ138) Carcinoma Cells.

    Trillium govanianum Wall. (Melanthiaceae alt. Trilliaceae), commonly known as 'nag chhatri' or 'teen patra', is a species native to the Himalayas. It is used in various traditional medicines and contains both steroids and sex hormones. In folk medicine, the rhizomes of T. govanianum are used to treat boils, dysentery, inflammation, menstrual and sexual disorders, as an antiseptic and in wound healing. Apart from a recent report on the isolation of a new steroidal saponin, govanoside A, together with three known steroidal compounds with antifungal properties from this plant, no systematic pharmacological or phytochemical work has been performed on T. govanianum. This paper reports, for the first time, the cytotoxicity of the methanol extract of the roots of T. govanianum and its solid-phase extraction (SPE) fractions against four human carcinoma cell lines: breast (MCF7), liver (HepG2), lung (A549) and urinary bladder (EJ138), using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) cytotoxicity assay and liquid chromatography-electrospray ionization quadrupole time-of-flight mass spectrometry analysis of the SPE fractions. The methanol extract and all SPE fractions exhibited considerable cytotoxicity against all cell lines, with IC50 values ranging between 5 and 16 µg/mL. As in other Trillium species, the presence of saponins and sapogenins in the SPE fractions was evident from the liquid chromatography-mass spectrometry data. Copyright © 2016 John Wiley & Sons, Ltd.
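    The quoted IC50 values come from dose-response data. As a sketch of how an IC50 can be estimated from MTT viability measurements, the snippet below fits a four-parameter logistic curve to hypothetical data; the concentrations, viabilities, and fitting choices are assumptions, not the paper's measurements.

```python
# Sketch: estimate an IC50 from a dose-response curve (hypothetical MTT data).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)        # µg/mL (hypothetical)
viability = np.array([95, 90, 75, 48, 25, 12, 8], dtype=float)  # % of untreated control

params, _ = curve_fit(four_pl, conc, viability, p0=[100, 0, 10, 1],
                      bounds=([0, 0, 0.1, 0.1], [200, 100, 1000, 10]))
print(f"estimated IC50 ~ {params[2]:.1f} µg/mL")
```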

    Model-based clustering of DNA methylation array data: a recursive-partitioning algorithm for high-dimensional data arising as a mixture of beta distributions

    Background: Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in DNA sequence. One of the most commonly studied epigenetic alterations is cytosine methylation, which is a well-recognized mechanism of epigenetic gene silencing and often occurs at tumor suppressor gene loci in human cancer. Arrays are now being used to study DNA methylation at a large number of loci; for example, the Illumina GoldenGate platform assesses DNA methylation at 1505 loci associated with over 800 cancer-related genes. Model-based cluster analysis is often used to identify DNA methylation subgroups in data, but it is unclear how to cluster DNA methylation data from arrays in a scalable and reliable manner.
    Results: We propose a novel model-based recursive-partitioning algorithm to navigate clusters in a beta mixture model. We present simulations showing that the method is more reliable than competing nonparametric clustering approaches and at least as reliable as conventional mixture model methods. We also show that our proposed method is more computationally efficient than conventional mixture model approaches. We demonstrate our method on normal tissue samples and show that the clusters are associated with tissue type as well as age.
    Conclusion: Our proposed recursively partitioned mixture model is an effective and computationally efficient method for clustering DNA methylation data.
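    To illustrate the underlying modeling assumption, that methylation values in [0,1] follow a mixture of beta distributions, the sketch below fits a two-component beta mixture with a simple EM loop using method-of-moments updates. This is not the paper's recursive-partitioning algorithm; the initialization, update scheme, and toy data are assumptions.

```python
# Minimal sketch: two-component beta mixture fitted by EM on toy methylation values.
import numpy as np
from scipy.stats import beta

def mom_beta(x, w):
    """Weighted method-of-moments estimates of beta shape parameters (a, b)."""
    m = np.average(x, weights=w)
    v = np.average((x - m) ** 2, weights=w)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

def em_beta_mixture(x, n_iter=50):
    params = [(2, 8), (8, 2)]            # initial (a, b) per component
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = np.stack([pi[k] * beta.pdf(x, *params[k]) for k in range(2)])
        resp = dens / dens.sum(axis=0)   # E-step: responsibilities
        pi = resp.mean(axis=1)           # M-step: mixing weights and shapes
        params = [mom_beta(x, resp[k]) for k in range(2)]
    return pi, params

rng = np.random.default_rng(1)
x = np.concatenate([rng.beta(2, 10, 300), rng.beta(10, 2, 200)])  # toy beta-valued data
print(em_beta_mixture(x))
```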

    Prediction of backbone dihedral angles and protein secondary structure using support vector machines

    Background: The prediction of the secondary structure of a protein is a critical step in the prediction of its tertiary structure and, potentially, its function. Moreover, the backbone dihedral angles, which are highly correlated with secondary structures, provide crucial information about the local three-dimensional structure.
    Results: We predict both the secondary structure and the backbone dihedral angles independently and combine the results in a loop to enhance each prediction reciprocally. Support vector machines, a state-of-the-art supervised classification technique, achieve a secondary structure predictive accuracy of 80% on a non-redundant set of 513 proteins, significantly higher than other methods on the same dataset. The dihedral angle space is divided into a number of regions using two unsupervised clustering techniques in order to predict the region to which a new residue belongs. The performance of our method is comparable to, and in some cases more accurate than, other multi-class dihedral prediction methods.
    Conclusions: We have created an accurate predictor of backbone dihedral angles and secondary structure. Our method, called DISSPred, is available online at http://comp.chem.nottingham.ac.uk/disspred/.
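    As a minimal illustration of the classification setup (not the DISSPred pipeline), the sketch below trains a multi-class SVM on sliding-window features and predicts the secondary-structure class of the central residue. The features, labels, window size, and kernel settings are placeholder assumptions.

```python
# Sketch: multi-class SVM on sliding-window per-residue features (placeholder data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_residues, window, n_feat = 500, 7, 20    # e.g. 20 profile values per residue
features = rng.normal(size=(n_residues, n_feat))
labels = rng.choice(["H", "E", "C"], size=n_residues)   # helix / strand / coil

half = window // 2
X = np.stack([features[i - half:i + half + 1].ravel()
              for i in range(half, n_residues - half)])
y = labels[half:n_residues - half]

clf = SVC(kernel="rbf", C=1.0).fit(X, y)   # one classifier per window's central residue
print("training accuracy:", clf.score(X, y))
```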

    Towards realistic benchmarks for multiple alignments of non-coding sequences

    Background: With the continued development of new computational tools for multiple sequence alignment, it is necessary today to develop benchmarks that aid the selection of the most effective tools. Simulation-based benchmarks have been proposed to meet this necessity, especially for non-coding sequences. However, it is not clear if such benchmarks truly represent real sequence data from any given group of species in terms of the difficulty of alignment tasks.
    Results: We find that the conventional simulation approach, which relies on empirically estimated values for various parameters such as substitution rate or insertion/deletion rates, is unable to generate synthetic sequences reflecting the broad genomic variation in conservation levels. We tackle this problem with a new method for simulating non-coding sequence evolution, relying on genome-wide distributions of evolutionary parameters rather than their averages. We then generate synthetic data sets to mimic orthologous sequences from the Drosophila group of species, and show that these data sets truly represent the variability observed in genomic data in terms of the difficulty of the alignment task. This allows us to make significant progress towards estimating the alignment accuracy of current tools in an absolute sense, going beyond only a relative assessment of different tools. We evaluate six widely used multiple alignment tools in the context of Drosophila non-coding sequences, and find the accuracy to be significantly different from previously reported values. Interestingly, the performance of most tools degrades more rapidly when there are more insertions than deletions in the data set, suggesting an asymmetric handling of insertions and deletions, even though none of the evaluated tools explicitly distinguishes these two types of events. We also examine the accuracy of two existing tools for annotating insertions versus deletions, and find their performance to be close to optimal in Drosophila non-coding sequences if provided with the true alignments.
    Conclusion: We have developed a method to generate benchmarks for multiple alignments of Drosophila non-coding sequences, and shown it to be more realistic than traditional benchmarks. Apart from helping to select the most effective tools, these benchmarks will help practitioners of comparative genomics deal with the effects of alignment errors, by providing accurate estimates of the extent of these errors.
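    The key idea, drawing per-locus evolutionary parameters from genome-wide distributions instead of using a single average, can be illustrated with a very crude simulation sketch. The parameter distributions and the single-branch mutation model below are assumptions for illustration only, not the paper's simulator.

```python
# Toy illustration: per-locus substitution/deletion rates drawn from distributions,
# so simulated loci span a range of conservation levels rather than one average.
import numpy as np

rng = np.random.default_rng(0)

def simulate_locus(length, sub_rate, indel_rate):
    """Very crude one-branch simulation: per-site substitutions and deletions."""
    seq = rng.choice(list("ACGT"), size=length)
    mutated = seq.copy()
    sub_sites = rng.random(length) < sub_rate
    mutated[sub_sites] = rng.choice(list("ACGT"), size=sub_sites.sum())
    keep = rng.random(length) >= indel_rate        # deletions only, for brevity
    return "".join(seq), "".join(mutated[keep])

# Each locus gets its own parameter draw (hypothetical genome-wide distributions).
for _ in range(3):
    sub = rng.gamma(shape=2.0, scale=0.1)
    indel = rng.gamma(shape=1.5, scale=0.02)
    anc, der = simulate_locus(50, min(sub, 0.9), min(indel, 0.5))
    print(f"sub={sub:.2f} indel={indel:.2f}", anc, der, sep="\n")
```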

    Massively Parallel Haplotyping on Microscopic Beads for the High-Throughput Phase Analysis of Single Molecules

    In spite of the many advances in haplotyping methods, it is still very difficult to characterize rare haplotypes in tissues and different environmental samples or to accurately assess the haplotype diversity in large mixtures. This would require a haplotyping method capable of analyzing the phase of single molecules with an unprecedented throughput. Here we describe such a haplotyping method, capable of analyzing hundreds of thousands of single molecules in parallel in one experiment. In this method, multiple PCR reactions amplify different polymorphic regions of a single DNA molecule on a magnetic bead compartmentalized in an emulsion drop. The allelic states of the amplified polymorphisms are identified with fluorescently labeled probes that are then decoded from images of the arrayed beads taken with a microscope. The method can evaluate the phase of up to three polymorphisms separated by up to 5 kilobases in hundreds of thousands of single molecules. We tested the sensitivity of the method by measuring the number of mutant haplotypes synthesized by four different commercially available enzymes: Phusion, Platinum Taq, Titanium Taq, and Phire. The digital nature of the method makes it highly sensitive, allowing detection of haplotype ratios of less than 1:10,000. We also accurately quantified chimera formation during the exponential phase of PCR by different DNA polymerases.
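    The digital read-out step, calling an allele at each polymorphism on every bead and tallying haplotypes across beads, can be sketched as follows. The intensity model, threshold, and bead counts are hypothetical; the paper's imaging and decoding pipeline is not reproduced here.

```python
# Sketch: per-bead allele calls at three polymorphisms, then haplotype counting.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_beads, n_snps = 100_000, 3

# Hypothetical per-bead, per-SNP intensity ratio (allele-A channel / total signal).
ratios = rng.beta(0.5, 0.5, size=(n_beads, n_snps))

calls = np.where(ratios > 0.5, "A", "B")               # simple threshold call
haplotypes = Counter("".join(row) for row in calls)     # digital haplotype counts

for hap, count in haplotypes.most_common(4):
    print(hap, count, f"{count / n_beads:.4%}")
```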

    Haplotype association analyses in resources of mixed structure using Monte Carlo testing

    Background: Genome-wide association studies have resulted in a great many genomic regions that are likely to harbor disease genes. Thorough interrogation of these specific regions is the logical next step, including regional haplotype studies to identify risk haplotypes upon which the underlying critical variants lie. Pedigrees ascertained for disease can be powerful for genetic analysis because the cases are enriched for genetic disease. Here we present a Monte Carlo based method to perform haplotype association analysis. Our method, hapMC, allows for the analysis of full-length and sub-haplotypes, including imputation of missing data, in resources of nuclear families, general pedigrees, case-control data or mixtures thereof. Both traditional association statistics and transmission/disequilibrium statistics can be computed. The method includes a phasing algorithm that can be used in large pedigrees and optional use of pseudocontrols.
    Results: Our new phasing algorithm substantially outperformed the standard expectation-maximization algorithm, which is ignorant of pedigree structure, and hence is preferable for resources that include pedigree structure. Through simulation we show that our Monte Carlo procedure maintains the correct type I error rates for all resource types. Power comparisons suggest that transmission/disequilibrium statistics are superior for performing association in resources of only nuclear families. For mixed-structure resources, however, the newly implemented pseudocontrol approach appears to be the best choice. Results also indicated the value of large high-risk pedigrees for association analysis, which, in the simulations considered, were comparable in power to case-control resources of the same sample size.
    Conclusions: We propose hapMC as a valuable new tool to perform haplotype association analyses, particularly for resources of mixed structure. The availability of meta-association and haplotype-mining modules in our suite of Monte Carlo haplotype procedures adds further value to the approach.
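    As a minimal illustration of the Monte Carlo idea behind such testing (not hapMC itself, which handles pedigrees, phasing, and pseudocontrols), the sketch below computes a permutation p-value for a candidate haplotype in case-control data. The data, carrier frequencies, and effect size are assumptions.

```python
# Sketch: Monte Carlo (permutation) test of haplotype association in case-control data.
import numpy as np

rng = np.random.default_rng(0)

def mc_pvalue(is_case, carries_hap, n_perm=10_000):
    """Compare the observed haplotype count in cases against a permutation null."""
    observed = carries_hap[is_case].sum()
    null = np.array([carries_hap[rng.permutation(is_case)].sum()
                     for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

# Hypothetical data: 200 cases, 200 controls; the haplotype is enriched in cases.
is_case = np.repeat([True, False], 200)
carries_hap = np.concatenate([rng.random(200) < 0.30, rng.random(200) < 0.20])
print("Monte Carlo p-value:", mc_pvalue(is_case, carries_hap))
```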

    Dynamic tracking error with shortfall control using stochastic programming

    In this contribution we tackle the issue of portfolio management by combining benchmarking and risk control. We propose a dynamic tracking error problem and consider the problem of monitoring, at discrete points in time, the shortfalls of the portfolio below a set of given reference levels of wealth. We formulate and solve the resulting dynamic optimization problem using stochastic programming. The proposed model allows for great flexibility in combining the tracking goal with downside risk protection. We provide the results of out-of-sample simulation experiments, on real data, for different portfolio configurations and different market conditions.
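    The combination of a tracking goal with shortfall protection can be illustrated with a one-period, scenario-based linear program; the paper's model is multi-stage stochastic programming, so the sketch below is only a simplified analogue with hypothetical scenario returns, wealth floor, and penalty weight.

```python
# Sketch: one-period scenario LP minimizing mean absolute tracking error plus a
# penalty on shortfall of terminal wealth below a reference level.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.04, 0.01, 0.02],          # asset returns in 4 scenarios (hypothetical)
              [-0.02, 0.00, 0.01],
              [0.06, 0.03, 0.02],
              [-0.05, -0.01, 0.00]])
b = np.array([0.02, -0.01, 0.04, -0.03])   # benchmark return per scenario
W0, L, lam = 100.0, 99.0, 0.5              # initial wealth, floor, shortfall penalty
S, n = R.shape

# Variables: [w (n weights), d (S tracking deviations), s (S shortfalls)]
c = np.concatenate([np.zeros(n), np.full(S, 1 / S), np.full(S, lam / S)])

A_ub, b_ub = [], []
for k in range(S):
    e_d = np.zeros(S); e_d[k] = 1
    e_s = np.zeros(S); e_s[k] = 1
    A_ub.append(np.concatenate([R[k], -e_d, np.zeros(S)])); b_ub.append(b[k])     #  R·w - d <=  b
    A_ub.append(np.concatenate([-R[k], -e_d, np.zeros(S)])); b_ub.append(-b[k])   # -R·w - d <= -b
    A_ub.append(np.concatenate([-W0 * R[k], np.zeros(S), -e_s])); b_ub.append(W0 - L)  # s >= L - wealth

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.concatenate([np.ones(n), np.zeros(2 * S)])], b_eq=[1.0],
              bounds=[(0, None)] * (n + 2 * S))
print("weights:", np.round(res.x[:n], 3))
```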